22 research outputs found

    RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition

    We compare the fast training and decoding speed of RETURNN attention models for translation, enabled by fast CUDA LSTM kernels and a fast pure TensorFlow beam search decoder. We show that a layer-wise pretraining scheme for recurrent attention models gives an absolute BLEU improvement of over 1% and allows training deeper recurrent encoder networks. Promising preliminary results on maximum expected BLEU training are presented. We are able to train state-of-the-art models for translation and end-to-end models for speech recognition, and show results on WMT 2017 and Switchboard. The flexibility of RETURNN allows a fast research feedback loop for experimenting with alternative architectures, and its generality allows it to be used on a wide range of applications. (Accepted as a demo paper at ACL 2018.)
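
    A minimal sketch of the layer-wise pretraining idea mentioned above, assuming a PyTorch-style encoder that starts shallow and gains LSTM layers between pretraining stages while keeping the already-trained weights. The module names, dimensions, and schedule are illustrative assumptions, not RETURNN's actual implementation.

```python
# Sketch of layer-wise pretraining for a recurrent encoder (illustrative only).
import torch
import torch.nn as nn

class GrowingEncoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Start with a single bidirectional LSTM layer.
        self.layers = nn.ModuleList([
            nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        ])

    def grow(self):
        # Append a new layer on top; lower layers keep their trained weights.
        self.layers.append(
            nn.LSTM(2 * self.hidden_dim, self.hidden_dim,
                    batch_first=True, bidirectional=True)
        )

    def forward(self, x):
        out = x
        for lstm in self.layers:
            out, _ = lstm(out)
        return out

# Hypothetical schedule: train for a few epochs, then deepen the encoder.
encoder = GrowingEncoder(input_dim=512, hidden_dim=512)
for stage in range(5):              # grow from 1 to 5 layers over 5 stages
    # train_epochs(encoder, ...)    # placeholder for the actual training loop
    if stage < 4:
        encoder.grow()
```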

    The QT21/HimL Combined Machine Translation System

    This paper describes the joint submission of the QT21 and HimL projects for the English→Romanian translation task of the ACL 2016 First Conference on Machine Translation (WMT 2016). The submission is a system combination of twelve different statistical machine translation systems provided by the participating groups (RWTH Aachen University, LMU Munich, Charles University in Prague, University of Edinburgh, University of Sheffield, Karlsruhe Institute of Technology, LIMSI, University of Amsterdam, Tilde). The systems are combined using RWTH’s system combination approach. The final submission shows an improvement of 1.0 BLEU over the best single system on newstest2016.
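
    As a rough illustration of the general idea behind combining multiple MT system outputs (this is a simplified consensus selection, not RWTH's confusion-network-based system combination), the sketch below picks, per sentence, the hypothesis that is closest on average to all other systems' outputs under word-level edit distance. The example hypotheses are invented.

```python
# Toy consensus selection over several systems' outputs for one source sentence.
def edit_distance(a, b):
    # Standard Levenshtein distance over token lists.
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def consensus(hypotheses):
    # hypotheses: list of token lists, one per MT system.
    return min(hypotheses,
               key=lambda h: sum(edit_distance(h, other) for other in hypotheses))

systems = [
    "the cat sits on the mat".split(),
    "the cat sat on the mat".split(),
    "a cat sat on the mat".split(),
]
print(" ".join(consensus(systems)))   # prints the most "central" hypothesis
```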

    Alignment-based neural networks for machine translation

    After more than a decade of phrase-based systems dominating the scene of machine translation, neural machine translation has emerged as the new machine translation paradigm. Not only does state-of-the-art neural machine translation demonstrate superior performance compared to conventional phrase-based systems, but it also presents an elegant end-to-end model that captures complex dependencies between source and target words. Neural machine translation offers a simpler modeling pipeline, making its adoption appealing both for practical and scientific reasons. Concepts like word alignment, a core component of phrase-based systems, are no longer required in neural machine translation. While this simplicity is viewed as an advantage, disregarding word alignment can come at the cost of less controllable translation. Phrase-based systems generate translations composed of word sequences that also occur in the training data. Neural machine translation, on the other hand, is more flexible and can generate translations without exact correspondence in the training data. This enables such models to produce more fluent output, but it also leaves the translation free of pre-defined constraints. The lack of an explicit word alignment makes it potentially harder to relate generated target words to the source words. With the wider deployment of neural machine translation in commercial products, the demand is increasing for giving users more control over the generated translation, such as enforcing or excluding the translation of certain terms.

    This dissertation aims to take a step towards addressing controllability in neural machine translation. We introduce alignment as a latent variable in neural network models and describe an alignment-based framework for neural machine translation. The models are inspired by the conventional IBM and hidden Markov models that are used to generate word alignments for phrase-based systems. However, our models build on recent neural network architectures that are able to capture more complex dependencies. In this sense, this work can be viewed as an attempt to bridge the gap between conventional statistical machine translation and neural machine translation. We demonstrate that introducing alignment explicitly maintains neural machine translation performance while making the models more explainable by improving alignment quality. We show that such improved alignment can be beneficial for real tasks where the user desires to influence the translation output.

    We also introduce recurrent neural networks to phrase-based systems in two different ways. We propose a method to integrate complex recurrent models, which capture long-range context, into the phrase-based framework, which considers only short context. We also use neural networks to rescore phrase-based translation candidates, and evaluate this in comparison to the direct integration approach.
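
    As a sketch of how alignment can enter the model as a latent variable in the spirit of the framework described above (the notation is illustrative and the dissertation's exact factorization may differ): the target sentence is generated by summing over alignments, with the per-position probability split into a lexicon model and an alignment model, analogous to the IBM/HMM decomposition.

```latex
% Illustrative latent-alignment factorization: target e_1^I, source f_1^J,
% a_i = source position aligned to target position i. Requires amsmath.
\begin{align*}
p(e_1^I \mid f_1^J)
  &= \sum_{a_1^I} p(e_1^I, a_1^I \mid f_1^J) \\
  &= \sum_{a_1^I} \prod_{i=1}^{I}
     \underbrace{p(e_i \mid e_1^{i-1}, a_1^{i}, f_1^J)}_{\text{lexicon model}}
     \cdot
     \underbrace{p(a_i \mid e_1^{i-1}, a_1^{i-1}, f_1^J)}_{\text{alignment model}}
\end{align*}
```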

    Vector Space Models for Phrase-based Machine Translation

    This paper investigates the application of vector space models (VSMs) to the standard phrase-based machine translation pipeline. VSMs are models based on continuous word representations embedded in a vector space. We exploit word vectors to augment the phrase table with new inferred phrase pairs. This helps reduce out-of-vocabulary (OOV) words. In addition, we present a simple way to learn bilingually-constrained phrase vectors. The phrase vectors are then used to provide additional scoring of phrase pairs, which fits into the standard log-linear framework of phrase-based statistical machine translation. Both methods result in significant improvements over a competitive in-domain baseline applied to the Arabic-to-English task of IWSLT 2013.
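
    A minimal sketch of the first idea above, reducing OOVs by mapping an out-of-vocabulary source word to its nearest in-vocabulary neighbour in the word-vector space and reusing that neighbour's phrase-table entries. The nearest-neighbour lookup and the copied-entry scheme are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative OOV handling with word vectors: copy phrase-table entries from
# the closest in-vocabulary word under cosine similarity.
import numpy as np

def nearest_in_vocab(oov_vec, vocab_vecs):
    # vocab_vecs: dict word -> vector, for words that appear in the phrase table.
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    return max(vocab_vecs, key=lambda w: cosine(oov_vec, vocab_vecs[w]))

def infer_phrase_pairs(oov_word, oov_vec, vocab_vecs, phrase_table):
    # phrase_table: dict source word -> list of (target phrase, score) pairs.
    neighbour = nearest_in_vocab(oov_vec, vocab_vecs)
    # New entries: translate the OOV word the way its neighbour is translated.
    return {oov_word: phrase_table.get(neighbour, [])}

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
vocab_vecs = {"house": rng.normal(size=8), "car": rng.normal(size=8)}
phrase_table = {"house": [("maison", 0.7)], "car": [("voiture", 0.8)]}
oov_vec = vocab_vecs["house"] + 0.01 * rng.normal(size=8)  # pretend "houses" is OOV
print(infer_phrase_pairs("houses", oov_vec, vocab_vecs, phrase_table))
```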

    On The Alignment Problem in Multi-Head Attention-Based Neural Machine Translation


    Translation Modeling with Bidirectional Recurrent Neural Networks

    This work presents two different translation models using recurrent neural networks. The first one is a word-based approach using word alignments. Second, we present phrase-based translation models that are more consistent with phrase-based decoding. Moreover, we introduce bidirectional recurrent neural models to the problem of machine translation, allowing us to use the full source sentence in our models, which is also of theoretical interest. We demonstrate that our translation models are capable of improving strong baselines already including recurrent neural language models on three tasks: IWSLT 2013 German→English, BOLT Arabic→English and Chinese→English. We obtain gains up to 1.6% BLEU and 1.7% TER by rescoring 1000-best lists.
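
    A minimal sketch of the n-best rescoring setup mentioned above: each candidate keeps its baseline decoder score and receives an additional neural translation-model score, and a log-linear combination re-ranks the list. The scoring function and the interpolation weight are illustrative placeholders, not the systems used in the paper.

```python
# Illustrative n-best list rescoring with an extra neural model score.
def rescore_nbest(nbest, neural_score, weight=0.3):
    """nbest: list of (candidate_tokens, baseline_score) for one source sentence."""
    rescored = [
        (cand, base + weight * neural_score(cand))
        for cand, base in nbest
    ]
    # Higher combined score is better; best candidate comes first.
    return sorted(rescored, key=lambda x: x[1], reverse=True)

# Toy usage with a dummy neural model that prefers shorter candidates.
nbest = [("the cat sat on the mat".split(), -10.2),
         ("the cat sits the mat".split(), -10.0)]
best, score = rescore_nbest(nbest, neural_score=lambda c: -0.5 * len(c))[0]
print(" ".join(best), score)
```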